@swayson
swayson / gemini-deep-research-prompt
Created March 8, 2025 08:49
gemini-deep-research-prompt
Core Purpose: To function as a comprehensive AI Research Assistant, producing detailed, well-structured, evidence-based, and unbiased reports on any given topic. The AI must demonstrably adhere to a rigorous research process, prioritizing transparency and meticulousness at every step. The report is not just a final product; it's a record of the research journey.
I. Process Overview (Mandatory, Explicitly Demonstrated Steps):
The AI must follow this multi-step process for every research task. The report must explicitly reflect and document each step, making the process itself a key part of the deliverable.
Topic Deconstruction and Planning (TDP) - Documented Phase
Analyze (with Explicit Listing):
@rohitg00
rohitg00 / llm-wiki.md
Last active April 12, 2026 18:26 — forked from karpathy/llm-wiki.md
LLM Wiki v2 — extending Karpathy's LLM Wiki pattern with lessons from building agentmemory

LLM Wiki v2

A pattern for building personal knowledge bases using LLMs. Extended with lessons from building agentmemory, a persistent memory engine for AI coding agents.

This builds on Andrej Karpathy's original LLM Wiki idea file. Everything in the original still applies. This document adds what we learned running the pattern in production: what breaks at scale, what's missing, and what separates a wiki that stays useful from one that rots.

What the original gets right

The core insight is correct: stop re-deriving, start compiling. RAG retrieves and forgets. A wiki accumulates and compounds. The three-layer architecture (raw sources, wiki, schema) works. The operations (ingest, query, lint) cover the basics. If you haven't read the original, start there.
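A minimal sketch of those three operations, assuming a hypothetical layout where the wiki is a directory of one markdown page per topic (the `WIKI_DIR` name and the size threshold in `lint` are illustrative, not from the original):

```python
import re
from pathlib import Path

WIKI_DIR = Path("wiki")  # hypothetical layout: one markdown page per topic

def ingest(page: str, note: str) -> None:
    """Append a note to a topic page, creating the page if needed."""
    WIKI_DIR.mkdir(exist_ok=True)
    with (WIKI_DIR / f"{page}.md").open("a", encoding="utf-8") as f:
        f.write(note.rstrip() + "\n")

def query(term: str) -> list[str]:
    """Return the names of pages whose text mentions the term."""
    pattern = re.compile(re.escape(term), re.IGNORECASE)
    return [p.stem for p in WIKI_DIR.glob("*.md")
            if pattern.search(p.read_text(encoding="utf-8"))]

def lint(max_bytes: int = 20_000) -> list[str]:
    """Flag pages that have grown too large to stay readable."""
    return [p.stem for p in WIKI_DIR.glob("*.md")
            if p.stat().st_size > max_bytes]
```

The point of the sketch is only the shape: ingest accumulates, query reads what was accumulated, lint guards against rot.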

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file; it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
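To make the contrast concrete, here is a minimal, hypothetical sketch of the chunk-then-retrieve step, using plain word overlap as a stand-in for the embedding similarity real RAG systems use:

```python
def chunk(text: str, size: int = 40) -> list[str]:
    """Split a document into fixed-size word windows."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the question.

    Real systems use embedding similarity; overlap keeps the sketch self-contained.
    """
    q = set(question.lower().split())
    ranked = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return ranked[:k]
```

Note what is missing: nothing persists between calls. Every question starts from the raw chunks again, which is exactly the rediscovery cost the wiki pattern is meant to remove.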

@R3DIANCE
R3DIANCE / gta-v-house-door-positions.json
Created August 21, 2024 06:22 — forked from MnkyArts/gta-v-house-door-positions.json
GTA V House Coordinates - 600+ Door Positions
[
  {
    "id": 1,
    "street": "Barbareno Rd",
    "pos": {
      "x": -3254.55,
      "y": 1063.99,
      "z": 11.1462
    }
  },
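As an illustration of how the file can be consumed, a small sketch that loads the array and finds the door entry nearest a given world position (the `nearest_door` helper is hypothetical, not part of the gist):

```python
import json
import math

def nearest_door(doors: list[dict], x: float, y: float, z: float) -> dict:
    """Return the door entry closest to the given world position."""
    def dist(d: dict) -> float:
        p = d["pos"]
        return math.dist((p["x"], p["y"], p["z"]), (x, y, z))
    return min(doors, key=dist)

# Usage, assuming the gist is saved under its own filename:
# with open("gta-v-house-door-positions.json") as f:
#     doors = json.load(f)
# print(nearest_door(doors, -3254.0, 1064.0, 11.0)["street"])
```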
@FabulousCupcake
FabulousCupcake / windows-bluetooth-autoconnect.md
Created August 27, 2021 18:49
Actually Disabling Bluetooth Autoconnect on Startup in Windows

Actually Disabling Bluetooth Autoconnect on Startup in Windows

Situation

I use my Bluetooth earphones across multiple devices: my Mac, my phone, and my tablet.
When I boot my Windows machine, it automatically connects to and snatches the earphones. This is very annoying.

Results from Google on how to disable this are unhelpful:

  • Unpair the device (???)
  • Disable bluetooth support system service via services.msc
  • Disable the device via Device Manager

This is a truncated mirror of the "Interviewing at Amazon — Leadership Principles" article, for those who want to read it without handing over their email.

Interviewing at Amazon — Leadership Principles

Understanding the Leadership Principles

When I have friends or relatives (or friends of friends, or friends of friends of relatives) ask how to prepare for an interview, I always suggest they read the description of the Amazon Leadership Principles, and think hard about each of them. More than any company I’ve worked with or heard about, we use those principles on a daily basis.

We obviously hire based on the principles. We give both positive and negative feedback which reference the principles. We are encouraged to be aware of our own successes and failures in relation to the leadership principles. I know I’ve certainly referenced a leadership principle or two while talking about p

[
  {"nldates-obsidian": {"id": "nldates-obsidian", "name": "Natural Language Dates", "author": "Argentina Ortega Sainz", "description": "Create date-links based on natural language.", "repo_url": "https://github.com/argenos/nldates-obsidian", "marketplace_url": "obsidian://show-plugin?id=nldates-obsidian"}},
  {"hotkeysplus-obsidian": {"id": "hotkeysplus-obsidian", "name": "Hotkeys++", "author": "Argentina Ortega Sainz", "description": "Additional hotkeys to do common things in Obsidian.", "repo_url": "https://github.com/argenos/hotkeysplus-obsidian", "marketplace_url": "obsidian://show-plugin?id=hotkeysplus-obsidian"}},
  {"obsidian-git": {"id": "obsidian-git", "name": "Obsidian Git", "author": "Vinzent, (Denis Olehov)", "description": "Backup your vault with Git", "repo_url": "https://github.com/denolehov/obsidian-git", "marketplace_url": "obsidian://show-plugin?id=obsidian-git"}},
  {"url-into-selection": {"id": "url-into-selection", "name": "Paste URL into selection", "author": "Denis Olehov", "description": "Paste URL \"into\" selected text.", "repo_url": "htt
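The file is a JSON array of single-key objects, one per plugin. A small hypothetical sketch that flattens that shape into one dict keyed by plugin id:

```python
def flatten_plugins(entries: list[dict]) -> dict:
    """Merge a list of single-key {plugin_id: metadata} objects into one dict."""
    merged: dict = {}
    for entry in entries:
        merged.update(entry)  # each entry carries exactly one plugin id
    return merged
```

With the flattened dict, `plugins["obsidian-git"]["repo_url"]` is a direct lookup instead of a scan over the list.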
@swrneko
swrneko / naive-proxy.guide.md
Last active April 12, 2026 18:21
Naive Proxy Guide

NaiveProxy: The Ultimate Setup Guide (2026)

📺 Video version of the guide


Installation instructions

1. Connect to the server: